


bea5955b308361a1b07bc55042e25e54-AuthorFeedback.pdf

Neural Information Processing Systems

We would like to thank all reviewers for their valuable feedback, which has helped us improve the paper. Upon acceptance, we will release the code for the model and for the semi-synthetic data generation. Metrics are reported as mean ± standard deviation. We will add a discussion of this in the conclusion. Evaluation on semi-synthetic data is standard for causal inference methods.


Revolutionizing Clinical Trials: A Manifesto for AI-Driven Transformation

van der Schaar, Mihaela, Peck, Richard, McKinney, Eoin, Weatherall, Jim, Bailey, Stuart, Rochon, Justine, Anagnostopoulos, Chris, Marquet, Pierre, Wood, Anthony, Best, Nicky, Amad, Harry, Piskorz, Julianna, Kacprzyk, Krzysztof, Salama, Rafik, Gunther, Christina, Frau, Francesca, Pugeat, Antoine, Hernandez, Ramon

arXiv.org Artificial Intelligence

Clinical trials are the bedrock of medical practice. They provide a scientifically rigorous way to test the safety and efficacy of new treatments, drugs, and medical devices. Considering the substantial investment required for clinical trials, with Phase III trials often exceeding $500 million and lasting several years [Sertkaya et al., 2024], it is crucial to conduct them with utmost efficiency. Furthermore, they represent a cornerstone in the pharmaceutical industry's ongoing commitment to enhancing patient care strategies. To inform evidence-based practice, trials should accurately represent the diversity and complexity of real-world patient populations, thereby strengthening the evidence for new treatments.


Causal Claims in Economics

Garg, Prashant, Fetzer, Thiemo

arXiv.org Artificial Intelligence

We analyze over 44,000 NBER and CEPR working papers from 1980 to 2023 using a custom language model to construct knowledge graphs that map economic concepts and their relationships. We distinguish between general claims and those documented via causal inference methods (e.g., DiD, IV, RDD, RCTs). We document a substantial rise in the share of causal claims, from roughly 4% in 1990 to nearly 28% in 2020, reflecting the growing influence of the "credibility revolution." We find that causal narrative complexity (e.g., the depth of causal chains) strongly predicts both publication in top-5 journals and higher citation counts, whereas non-causal complexity tends to be uncorrelated or negatively associated with these outcomes. Novelty is also pivotal for top-5 publication, but only when grounded in credible causal methods: introducing genuinely new causal edges or paths markedly increases both the likelihood of acceptance at leading outlets and long-run citations, while non-causal novelty exhibits weak or even negative effects. Papers engaging with central, widely recognized concepts tend to attract more citations, highlighting a divergence between the factors driving publication success and those driving long-term academic impact. Finally, bridging underexplored concept pairs is rewarded primarily when grounded in causal methods, yet such gap filling exhibits no consistent link with future citations. Overall, our findings suggest that methodological rigor and causal innovation are key drivers of academic recognition, but sustained impact may require balancing novel contributions with conceptual integration into established economic discourse.


An Introduction to Causal Inference Methods for Observational Human-Robot Interaction Research

Lee, Jaron J. R., Ajaykumar, Gopika, Shpitser, Ilya, Huang, Chien-Ming

arXiv.org Artificial Intelligence

Quantitative methods in Human-Robot Interaction (HRI) research have primarily relied upon randomized, controlled experiments in laboratory settings. However, such experiments are not always feasible when external validity, ethical constraints, and ease of data collection are of concern. Furthermore, as consumer robots become increasingly available, growing amounts of real-world data will be available to HRI researchers, which prompts the need for quantitative approaches tailored to the analysis of observational data. In this article, we present an alternative approach to quantitative research for HRI researchers using methods from causal inference that can enable researchers to identify causal relationships in observational settings where randomized, controlled experiments cannot be run. We highlight different scenarios that HRI research with consumer household robots may involve to contextualize how methods from causal inference can be applied to observational HRI research. We then provide a tutorial summarizing key concepts from causal inference using a graphical model perspective and link to code examples throughout the article, which are available at https://gitlab.com/causal/causal_hri. Our work paves the way for further discussion of new approaches to observational HRI research while providing a starting point for HRI researchers to add causal inference techniques to their analytical toolbox.
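As a concrete illustration of the kind of graphical-model reasoning the tutorial covers, the sketch below simulates a confounded observational HRI-style dataset and contrasts a naive mean difference with a backdoor-adjusted estimate. It is not the authors' released code (see their repository above); the scenario and variable names are hypothetical.

```python
# Minimal sketch of backdoor adjustment on simulated observational data.
# The confounder, treatment, and outcome here are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

z = rng.normal(size=n)                          # confounder, e.g. a user's tech-savviness
t = (rng.normal(size=n) + z > 0).astype(float)  # treatment, e.g. using the robot's autonomous mode
y = 2.0 * t + 1.5 * z + rng.normal(size=n)      # outcome with a true effect of 2.0

# Naive contrast ignores the confounder and is biased upward.
naive = y[t == 1].mean() - y[t == 0].mean()

# Backdoor adjustment via the g-formula: model E[Y | T, Z], then average the
# predicted outcomes over the observed confounder distribution.
model = LinearRegression().fit(np.column_stack([t, z]), y)
y1 = model.predict(np.column_stack([np.ones(n), z]))
y0 = model.predict(np.column_stack([np.zeros(n), z]))
adjusted = (y1 - y0).mean()

print(f"naive estimate:    {naive:.2f}")     # inflated by confounding
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true effect of 2.0
```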


Validating Causal Inference Methods

Parikh, Harsh, Varjao, Carlos, Xu, Louise, Tchetgen, Eric Tchetgen

arXiv.org Artificial Intelligence

The fundamental challenge of drawing causal inference is that counterfactual outcomes are not fully observed for any unit. Furthermore, in observational studies, treatment assignment is likely to be confounded. Many statistical methods have emerged for causal inference under unconfoundedness conditions given pre-treatment covariates, including propensity score-based methods, prognostic score-based methods, and doubly robust methods. Unfortunately for applied researchers, there is no 'one-size-fits-all' causal method that performs optimally in every setting. In practice, causal methods are primarily evaluated quantitatively on handcrafted simulated data. Such data-generative procedures can be of limited value because they are typically stylized models of reality: simplified for tractability and lacking the complexities of real-world data. For applied researchers, it is critical to understand how well a method performs for the data at hand. Our work introduces a deep generative model-based framework, Credence, to validate causal inference methods. The framework's novelty stems from its ability to generate synthetic data anchored at the empirical distribution of the observed sample, and therefore virtually indistinguishable from the latter. The approach allows the user to specify the ground truth for the form and magnitude of causal effects and confounding bias as functions of covariates. The simulated data sets are then used to evaluate the potential performance of various causal estimation methods when applied to data similar to the observed sample. We demonstrate Credence's ability to accurately assess the relative performance of causal estimation techniques in an extensive simulation study and two real-world data applications from the Lalonde and Project STAR studies.
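The abstract implies a general validation loop: generate data anchored to the observed sample with a user-specified ground-truth effect, run candidate estimators, and score them against that truth. The sketch below illustrates that loop with a simple parametric simulator standing in for Credence's deep generative model; it is a schematic under that assumption, not the Credence implementation.

```python
# Schematic validation loop: simulate data with a known ATE, then score estimators.
# The Gaussian simulator is a stand-in for a generative model fit to real data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)

def simulate(n, true_ate=1.0, confounding=1.0):
    """Synthetic observational sample with known ATE and confounding strength."""
    x = rng.normal(size=(n, 3))
    propensity = 1 / (1 + np.exp(-confounding * x[:, 0]))
    t = rng.binomial(1, propensity)
    y = true_ate * t + x @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)
    return x, t, y

def ipw_ate(x, t, y):
    e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

def regression_ate(x, t, y):
    return LinearRegression().fit(np.column_stack([t, x]), y).coef_[0]

true_ate = 1.0
x, t, y = simulate(20_000, true_ate=true_ate)
for name, estimator in [("IPW", ipw_ate), ("outcome regression", regression_ate)]:
    error = abs(estimator(x, t, y) - true_ate)
    print(f"{name}: absolute error vs. ground truth = {error:.3f}")
```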


Poincare: Recommending Publication Venues via Treatment Effect Estimation

Sato, Ryoma, Yamada, Makoto, Kashima, Hisashi

arXiv.org Machine Learning

Choosing a publication venue for an academic paper is a crucial step in the research process. However, in many cases, decisions are based on the experience of researchers, which often leads to suboptimal results. Although some existing methods recommend publication venues, they merely recommend venues where a paper is likely to be published. In this study, we aim to recommend publication venues from a different perspective. We estimate the number of citations a paper will receive if it is published in each venue and recommend the venue where the paper has the most potential impact. However, there are two challenges to this task. First, a paper is published in only one venue, so we cannot observe the number of citations the paper would have received had it been published in another venue. Second, the contents of a paper and its publication venue are not statistically independent; that is, there exist selection biases in choosing publication venues. In this paper, we propose to use a causal inference method to effectively estimate the treatment effects of choosing a publication venue and to recommend venues based on the potential influence of papers.
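The recommendation rule the abstract describes (estimate potential citations under each venue, then pick the best) can be sketched with a simple per-venue outcome model. The snippet below is a T-learner-style illustration under that reading, not the authors' Poincare model; the venue names and features are invented, and it ignores the selection-bias correction that is the paper's actual contribution.

```python
# Sketch: fit one citation model per venue, then recommend the venue with the
# highest predicted potential citations for a new paper. Purely illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
venues = ["VenueA", "VenueB", "VenueC"]     # hypothetical venues
n, d = 3_000, 10

features = rng.normal(size=(n, d))          # paper features (e.g., text embeddings)
published_in = rng.integers(len(venues), size=n)
citations = rng.poisson(lam=np.exp(0.3 * features[:, 0] + 0.2 * published_in))

# One outcome model per venue, trained on the papers that appeared there.
models = {v: Ridge().fit(features[published_in == v], citations[published_in == v])
          for v in range(len(venues))}

def recommend(paper):
    """Return the venue with the highest predicted potential citation count."""
    preds = {venues[v]: float(m.predict(paper[None, :])[0]) for v, m in models.items()}
    return max(preds, key=preds.get), preds

best, preds = recommend(rng.normal(size=d))
print("predicted citations per venue:", {k: round(v, 1) for k, v in preds.items()})
print("recommended venue:", best)
```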


Microsoft DoWhy is an Open Source Framework for Causal Reasoning

#artificialintelligence

The human mind has a remarkable ability to associate causes with specific events. From the outcome of an election to an object dropping on the floor, we are constantly associating chains of events that cause a specific effect. Neuropsychology refers to this cognitive ability as causal reasoning. Computer science and economics study a specific form of causal reasoning known as causal inference, which focuses on estimating cause-and-effect relationships between observed variables. Over the years, machine learning has produced many methods for causal inference, but most remain difficult to use in mainstream applications.


Reflection on modern methods: when worlds collide--prediction, machine learning and causal inference

#artificialintelligence

Causal inference requires theory and prior knowledge to structure analyses, and is not usually thought of as an arena for the application of prediction modelling. However, contemporary causal inference methods, premised on counterfactual or potential outcomes approaches, often include prediction-based processing steps before the final estimation step. The purposes of this paper are: (i) to overview the recent emergence of prediction as a step underpinning contemporary causal inference methods, and (ii) to explore the role of machine learning (as one approach to 'best prediction') in causal inference. Causal inference methods covered include propensity scores, inverse probability of treatment weights (IPTWs), G-computation and targeted maximum likelihood estimation (TMLE). Machine learning has been used more for propensity scores and TMLE, and there is potential for increased use in G-computation and estimation of IPTWs.
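The pattern the article describes, a prediction model embedded inside a causal estimator, can be made concrete with inverse probability of treatment weighting: a machine-learning classifier predicts treatment (the propensity score), and the final causal step is just a weighted outcome contrast. The sketch below is an illustrative toy example, not code from the paper.

```python
# Prediction step inside a causal method: an ML classifier supplies propensity
# scores, which become IPTWs for a weighted contrast of outcomes.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 10_000

x = rng.normal(size=(n, 5))                                 # pre-treatment covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))      # true treatment probability
t = rng.binomial(1, p_treat)
y = 1.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)  # true effect = 1.0

# Prediction step: any well-calibrated classifier can stand in here.
ps = GradientBoostingClassifier().fit(x, t).predict_proba(x)[:, 1]

# Causal step: inverse probability of treatment weighting.
ate_iptw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print(f"IPTW estimate of the average treatment effect: {ate_iptw:.2f}")  # ~1.0
```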


Quantifying Error in the Presence of Confounders for Causal Inference

Desai, Rathin, Sharma, Amit

arXiv.org Machine Learning

Estimating the average causal effect (ACE) is useful whenever we want to know the effect of an intervention on a given outcome. In the absence of a randomized experiment, many methods such as stratification and inverse propensity weighting have been proposed to estimate the ACE. However, it is hard to know which method is optimal for a given dataset or which hyperparameters to use for a chosen method. To this end, we provide a framework to characterize the loss of a causal inference method against the true ACE by framing causal inference as a representation learning problem. We show that many popular methods, including back-door methods, can be considered weighting or representation learning algorithms, and provide general error bounds for their causal estimates. In addition, we consider the case when unobserved variables can confound the causal estimate and extend the proposed bounds using principles of robust statistics, treating confounding as contamination under the Huber contamination model. These bounds are also estimable; as an example, we provide empirical bounds for the Inverse Propensity Weighting (IPW) estimator and show how the bounds can be used to optimize the threshold for clipping extreme propensity scores. Our work provides a new way to reason about competing estimators, and opens up the potential of deriving new methods by minimizing the proposed error bounds.
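To make the clipping step concrete, the sketch below truncates estimated propensity scores to [eps, 1 - eps] before inverse weighting and shows how the IPW estimate moves as the threshold varies. It only illustrates the mechanism; the paper's error bounds for choosing the threshold are not reproduced here.

```python
# IPW with propensity clipping: extreme propensities are truncated before weighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5_000
x = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-3 * x[:, 0])))   # strong confounding -> extreme propensities
y = 1.0 * t + 2.0 * x[:, 0] + rng.normal(size=n)      # true effect = 1.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

for eps in (0.0, 0.01, 0.05, 0.10):
    ps_c = np.clip(ps, eps, 1 - eps) if eps > 0 else ps
    ate = np.mean(t * y / ps_c - (1 - t) * y / (1 - ps_c))
    print(f"eps = {eps:.2f}: IPW estimate = {ate:.2f}")
```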


DoWhy – A library for causal inference - Microsoft Research

#artificialintelligence

For decades, causal inference methods have found wide applicability in the social and biomedical sciences. As computing systems start intervening in our work and daily lives, questions of cause-and-effect are gaining importance in computer science as well. To enable widespread use of causal inference, we are pleased to announce a new software library, DoWhy. Its name is inspired by Judea Pearl's do-calculus for causal inference. In addition to providing a programmatic interface for popular causal inference methods, DoWhy is designed to highlight the critical but often neglected assumptions underlying causal inference analyses.
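To show what the programmatic interface looks like in practice, here is a short sketch following the four-step workflow (model, identify, estimate, refute) from DoWhy's documentation, run on one of the library's built-in synthetic datasets. Method names and argument details may differ slightly across DoWhy versions, so treat this as an approximate usage example rather than canonical API documentation.

```python
# Four-step DoWhy workflow on a built-in synthetic dataset.
import dowhy.datasets
from dowhy import CausalModel

# 1. Model: build a causal graph over treatment, outcome, and common causes.
data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5,
                                     num_samples=10_000, treatment_is_binary=True)
model = CausalModel(data=data["df"],
                    treatment=data["treatment_name"],
                    outcome=data["outcome_name"],
                    graph=data["gml_graph"])

# 2. Identify: derive an estimand (e.g., backdoor adjustment) from the graph.
estimand = model.identify_effect()

# 3. Estimate: apply a concrete estimator to the identified estimand.
estimate = model.estimate_effect(estimand,
                                 method_name="backdoor.propensity_score_matching")
print("estimated effect:", estimate.value)

# 4. Refute: stress-test the estimate against violations of its assumptions.
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="random_common_cause")
print(refutation)
```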